Co-evolving Multilayer Perceptrons Along Training Sets

Authors

  • Maribel García Arenas
  • Pedro Ángel Castillo Valdivieso
  • Gustavo Romero
  • Fatima Rateb
  • Juan Julián Merelo Guervós
Abstract

When designing artificial neural networks (ANNs), it is important to optimise the network architecture and the learning coefficients of the training algorithm, as well as the time the training phase takes, since this is the most time-consuming phase. In this paper, an approach to cooperative co-evolutionary optimisation of multilayer perceptrons (MLPs) is presented. Cooperative co-evolution is performed on the MLP and the training set at the same time. Results show that this co-evolutionary model reaches an optimal MLP with a generalisation error comparable to that reported by other authors, but using a smaller training set, co-evolved with the system.
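To make the abstract's idea concrete, here is a minimal sketch of cooperative co-evolution between an MLP population and a training-set population. It is not the authors' implementation: scikit-learn's MLPClassifier stands in for the paper's MLP individuals, the two-moons data replaces the benchmark problems, and the fitness definitions, mutation operators, and all parameter values are illustrative assumptions.

```python
# Sketch of cooperative co-evolution of MLPs and training subsets.
# NOT the paper's algorithm: components and parameters are assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Full training pool and a fixed validation set used for fitness.
X_pool, y_pool = make_moons(n_samples=300, noise=0.25, random_state=0)
X_val, y_val = make_moons(n_samples=200, noise=0.25, random_state=1)

POP, GENS = 6, 10                       # kept tiny so the sketch runs quickly

# Population 1: MLP genomes (here just the hidden-layer width).
widths = [int(h) for h in rng.integers(2, 20, size=POP)]
# Population 2: training-set genomes (boolean masks over the pool).
masks = rng.random((POP, len(X_pool))) < 0.5

def fitness(width, mask):
    """Train an MLP on a co-evolved subset, score it on validation data."""
    if mask.sum() < 10:                 # degenerate subsets score zero
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=300,
                        random_state=0)
    clf.fit(X_pool[mask], y_pool[mask])
    return clf.score(X_val, y_val)

for gen in range(GENS):
    # Cooperative evaluation: every MLP is paired with every subset.
    scores = np.array([[fitness(w, m) for m in masks] for w in widths])
    mlp_fit = scores.max(axis=1)        # best result with any collaborator
    # Subsets are rewarded for accuracy and (mildly) for being small.
    mask_fit = scores.max(axis=0) - 0.1 * masks.mean(axis=1)

    # Truncation selection: keep the best half, refill by mutation.
    keep = POP // 2
    widths = [widths[i] for i in np.argsort(mlp_fit)[::-1][:keep]]
    masks = masks[np.argsort(mask_fit)[::-1][:keep]]
    widths += [max(2, w + int(rng.integers(-2, 3))) for w in widths]
    flips = rng.random(masks.shape) < 0.05          # flip ~5% of pattern bits
    masks = np.vstack([masks, np.where(flips, ~masks, masks)])

    print(f"gen {gen}: best accuracy {scores.max():.3f}, "
          f"best subset size {int(masks[0].sum())}/{len(X_pool)}")
```

The cooperative element is that each individual's fitness depends on collaborators from the other population: an MLP is scored with its best co-evolved subset, and a subset is scored by the MLPs it trains, minus a small size penalty that pushes the search toward smaller training sets, mirroring the outcome the abstract reports.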


Related Papers

The Effect of Training Set Size for the Performance of Neural Networks of Classification

Although multilayer perceptrons and radial basis function networks both belong to the class of artificial neural networks and are used for similar tasks, they have very different structures and training mechanisms. Consequently, some researchers have reported better performance with radial basis function networks, while others have obtained different results with multilayer perceptrons. This paper compares the...


Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks

In this paper we conduct a comparative study of hybrid methods for optimizing multilayer perceptrons: a model that optimizes the architecture and initial weights of multilayer perceptrons; a parallel approach to optimize the architecture and initial weights of multilayer perceptrons; a method that searches for the parameters of the training algorithm; and an approach for cooperative co-evolut...


Comparing evolutionary hybrid systems for design and optimization of multilayer perceptron structure along training parameters

In this paper, we present a comparative study of several methods that combine evolutionary algorithms and local search to optimize multilayer perceptrons: a method that optimizes the architecture and initial weights of multilayer perceptrons; another that searches for training algorithm parameters; and finally, a co-evolutionary algorithm, introduced in this paper, that handles the arch...


Training Multilayer Perceptrons with the Extended Kalman Algorithm

A large fraction of recent work in artificial neural nets uses multilayer perceptrons trained with the back-propagation algorithm described by Rumelhart et al. This algorithm converges slowly for large or complex problems such as speech recognition, where thousands of iterations may be needed for convergence even with small data sets. In this paper, we show that training multilayer perceptrons...


Enlarging Training Sets for Neural Networks

A study is presented to compare the performance of multilayer perceptrons, radial basis function networks, and probabilistic neural networks for classification. In many classification problems, probabilistic neural networks have outperformed other neural classifiers. Unfortunately, with this kind of network, the number of operations required to classify one pattern depends directly on the numb...




Publication date: 2004